
    Smart Contract Upgradeability on the Ethereum Blockchain Platform: An Exploratory Study

    Context: Smart contracts are computerized self-executing contracts that contain clauses which are enforced once certain conditions are met. Smart contracts are immutable by design and cannot be modified once deployed, which ensures trustlessness. Despite the benefits of immutability, upgrading contract code is still necessary for bug fixes and potential feature improvements. In the past few years, the smart contract community has introduced several practices for upgrading smart contracts. Upgradeable contracts are smart contracts that exhibit these practices and are designed with upgradeability in mind. During the upgrade process, a new smart contract version is deployed with the desired modifications, and subsequent user requests are forwarded to the latest version (the upgraded contract). Nevertheless, little is known about the characteristics of these upgrading practices, how developers apply them, and how upgrading impacts contract usage. Objectives: This paper aims to characterize smart contract upgrading patterns and analyze their prevalence based on the deployed contracts that exhibit these patterns. Furthermore, we intend to investigate why developers upgrade contracts (e.g., to introduce features or fix vulnerabilities) and how upgrades affect the adoption and life span of a contract in practice. Method: We collect the metadata and source code of deployed smart contracts to identify contracts that exhibit certain upgrade patterns (upgradeable contracts) based on a set of policies. We then trace the smart contract versions of each upgradeable contract and identify the changes between versions using similarity and vulnerability detection tools. Finally, we plan to analyze the impact of upgrading on contract usage based on the number of transactions received and the lifetime of each contract version.
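
    To make the detection step concrete, the sketch below (Python, hypothetical) flags verified contract sources that contain delegatecall/proxy indicators; the `PROXY_INDICATORS` strings and the `looks_upgradeable` helper are illustrative assumptions, not the study's actual policies.

```python
# Hypothetical sketch of an upgrade-pattern detection policy: flag verified
# contract sources that look like delegatecall-based proxies. The indicator
# strings below are illustrative assumptions, not the paper's actual rule set.
import re

PROXY_INDICATORS = (
    r"delegatecall",         # low-level forwarding to an implementation contract
    r"upgradeTo(AndCall)?",  # admin entry points typical of proxy/upgrade contracts
    r"eip1967",              # storage-slot convention used by many proxy patterns
)

def looks_upgradeable(source_code: str) -> bool:
    """Return True if the source matches any proxy-style indicator."""
    return any(re.search(p, source_code, re.IGNORECASE) for p in PROXY_INDICATORS)

# Usage: filter a dump of (address, verified source) pairs into upgradeable candidates.
contracts = [("0x1234...", "contract Proxy { fallback() external { /* delegatecall */ } }")]
candidates = [addr for addr, src in contracts if looks_upgradeable(src)]
```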

    Open Challenges of Interactive Video Search and Evaluation

    During the last 10 years of the Video Browser Showdown (VBS), many different approaches have been tested for known-item search and ad-hoc search tasks. Undoubtedly, teams incorporating state-of-the-art models from the machine learning domain had an advantage over teams focusing solely on interactive interfaces. On the other hand, VBS results indicate that effective means of interaction with a search system are still necessary to accomplish challenging search tasks. In this tutorial, we summarize successful deep models tested at the Video Browser Showdown as well as interfaces designed on top of the corresponding distance/similarity spaces. We also present our broad experience with competition organization and evaluation, focusing on promising findings as well as challenging problems from the most recent iterations of the Video Browser Showdown.

    Prototyping a Web-Scale Multimedia Retrieval Service Using Spark

    The world has experienced phenomenal growth in data production and storage in recent years, much of which has taken the form of media files. At the same time, computing power has become abundant with multi-core machines, grids, and clouds. Yet it remains a challenge to harness the available power and move toward gracefully searching and retrieving from web-scale media collections. Several researchers have experimented with automatically distributed computing frameworks, notably Hadoop and Spark, for processing multimedia material, but mostly using small collections on small computing clusters. In this article, we describe a prototype of a (near) web-scale, throughput-oriented multimedia retrieval service using the Spark framework running on the AWS cloud service. We present retrieval results using up to 43 billion SIFT feature vectors from the public YFCC100M collection, making this the largest high-dimensional feature vector collection reported in the literature. We also present a publicly available demonstration retrieval system, running on our own servers, where the implementation of the Spark pipelines can be observed in practice using standard image benchmarks and downloaded for research purposes. Finally, we describe a method to evaluate the retrieval quality of the ever-growing high-dimensional index of the prototype, without actually indexing a web-scale media collection.
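
    The sketch below (PySpark, hypothetical) shows only the broad shape of such a pipeline: a brute-force k-nearest-neighbour scan of one query over a toy, in-memory collection of SIFT-like vectors. The collection size, partition count, and distance measure are assumptions; the real prototype relies on compressed, clustered indexing rather than a full scan.

```python
# Minimal, hypothetical sketch of the pipeline shape only: brute-force k-NN of one
# query against a distributed toy collection of SIFT-like vectors.
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("knn-sketch").getOrCreate()
sc = spark.sparkContext

dim, n = 128, 100_000                                   # toy stand-in, not 43 billion vectors
vectors = [(i, np.random.rand(dim).astype(np.float32)) for i in range(n)]
query = np.random.rand(dim).astype(np.float32)

rdd = sc.parallelize(vectors, numSlices=64)             # distribute the collection
bq = sc.broadcast(query)                                # ship the query to every worker

def score(item):
    vec_id, vec = item
    return (float(np.linalg.norm(vec - bq.value)), vec_id)  # (L2 distance, vector id)

top_k = rdd.map(score).takeOrdered(10)                  # per-partition top-k, merged at the driver
print(top_k)
spark.stop()
```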

    The Quality vs. Time Trade-off for Approximate Image Descriptor Search

    In recent years, content-based image retrieval has become more and more important in many application areas. Similarity retrieval is inherently a very demanding process, in particular when performing exact searches. There is therefore increasing interest in performing approximate searches, where result quality guarantees are traded for reduced query execution time. The goal of approximate retrieval systems should be to obtain the best possible result quality in the minimum amount of time. As a result, typical indexing strategies divide the data set into many data chunks. Minimizing the search time suggests generating uniformly sized chunks to best overlap I/O costs with CPU costs. Maximizing quality, on the other hand, suggests strictly limiting the intra-chunk dissimilarity of the data. This paper addresses the question of to what extent guaranteeing the query processing time, using uniform chunk sizes, compromises the quality of the results, and vice versa. Using a large collection of 5 million 24-dimensional local descriptors computed over more than 50 thousand real-life images, we show that minimizing the query processing time may in fact lead to better quality of the intermediate results.
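
    A minimal sketch of the chunk-then-probe idea follows, assuming random centroids, 64 chunks, and a probe depth of 4; all of these are illustrative choices, not the paper's configuration.

```python
# Hedged sketch of chunked approximate search: descriptors are grouped into chunks
# around centroids, and a query scans only the few closest chunks. The paper's
# question is how uniform chunk sizes vs. tight intra-chunk similarity affect quality.
import numpy as np

rng = np.random.default_rng(0)
descriptors = rng.random((10_000, 24), dtype=np.float32)   # toy 24-dimensional descriptors
n_chunks = 64
centroids = descriptors[rng.choice(len(descriptors), n_chunks, replace=False)]

# Assign each descriptor to its nearest centroid, forming one chunk per centroid.
assignment = np.argmin(((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
chunks = [descriptors[assignment == c] for c in range(n_chunks)]

def approx_search(query, probe=4, k=10):
    """Scan only the `probe` chunks whose centroids lie closest to the query."""
    order = np.argsort(((centroids - query) ** 2).sum(-1))[:probe]
    candidates = np.concatenate([chunks[c] for c in order])
    dists = ((candidates - query) ** 2).sum(-1)
    return candidates[np.argsort(dists)[:k]]

neighbours = approx_search(rng.random(24, dtype=np.float32))
```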

    Exquisitor: Breaking the Interaction Barrier for Exploration of 100 Million Images

    In this demonstration, we present Exquisitor, a media explorer capable of learning user preferences in real time during interactions with the 99.2 million images of YFCC100M. Exquisitor owes its efficiency to innovations in data representation, compression, and indexing. It can complete each interaction round, including learning preferences and presenting the most relevant results, in less than 30 ms using only a single CPU core and modest RAM. In short, Exquisitor can bring large-scale interactive learning to standard desktops and laptops, and even high-end mobile devices.
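
    One interaction round might look roughly like the sketch below, which assumes a linear SVM over toy feature vectors; Exquisitor's sub-30 ms rounds additionally depend on its compressed representation and cluster index, which the sketch omits.

```python
# Illustrative sketch of one interactive-learning round (not Exquisitor's actual code):
# fit a linear model on the user's positive/negative examples, then rank unseen items.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
features = rng.random((5_000, 64), dtype=np.float32)     # toy stand-in for image features

def interaction_round(pos_ids, neg_ids, k=25):
    """Fit a linear preference model and return the k highest-scoring unseen items."""
    X = features[list(pos_ids) + list(neg_ids)]
    y = [1] * len(pos_ids) + [0] * len(neg_ids)
    model = LinearSVC().fit(X, y)                        # preference model for this round
    scores = model.decision_function(features)           # score the whole (toy) collection
    seen = set(pos_ids) | set(neg_ids)
    ranked = np.argsort(-scores)
    return [int(i) for i in ranked if int(i) not in seen][:k]

suggestions = interaction_round(pos_ids=[1, 42, 77], neg_ids=[5, 300])
```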

    Aligning quality assurance at the course unit and educational program levels.

    Quality assurance is a subject that has grown dramatically in importance in recent times. In previous work, we described how the ACM Curricula can be used to support the quality assurance process of educational programs, using the Computer Science program at Reykjavik University as an example. Faculty members and employers of graduates participated in the process, which provided both detailed quantitative data and qualitative information. The assessment also raised awareness of how abstract topics and learning outcomes from an international standard can be used when revising the curriculum of a particular course in a CS program. Quality assurance is indeed a continuous process, in which the results of evaluations should be used to drive improvements. In this paper, we focus on how a Database course was restructured based on a recent quality assurance process.

    A task category space for user-centric comparative multimedia search evaluations

    In the last decade, user-centric video search competitions have facilitated the evolution of interactive video search systems. So far, these competitions have focused on a small number of search task categories, with few attempts to change task category configurations. Based on our extensive experience with interactive video search contests, we have analyzed the spectrum of possible task categories and propose a list of individual axes that define a large space of possible task categories. Using this concept of a category space, new user-centric video search competitions can be designed to benchmark video search systems from different perspectives. We further analyze the three task categories considered so far at the Video Browser Showdown and discuss possible (but sometimes challenging) shifts within the task category space.

    Interactive video retrieval evaluation at a distance: comparing sixteen interactive video search systems in a remote setting at the 10th Video Browser Showdown

    The Video Browser Showdown addresses difficult video search challenges through an annual interactive evaluation campaign that attracts research teams focusing on interactive video retrieval. The campaign aims to provide insights into the performance of participating interactive video retrieval systems, tested on selected search tasks over large video collections. For the first time in its ten-year history, the Video Browser Showdown 2021 was organized in a fully remote setting and hosted a record number of sixteen scoring systems. In this paper, we describe the competition setting, tasks, and results, and give an overview of the state-of-the-art methods used by the competing systems. By examining the query result logs provided by ten systems, we analyze differences in retrieval model performance and in browsing times before a correct submission. Through advances in data-gathering methodology and tools, we provide a comprehensive analysis of ad-hoc video search tasks and discuss results, task design, and methodological challenges. We highlight that almost all top-performing systems utilize some form of joint embedding for text-image retrieval and enable the specification of temporal context in queries for known-item search. While a combination of these techniques drives the currently top-performing systems, we identify several future challenges for interactive video search engines and for the Video Browser Showdown competition itself.
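
    As an illustration of the temporal-context idea for known-item search, the sketch below scores each shot by how well it matches one textual sub-query while a nearby later shot matches a second sub-query; the embedding dimensionality, window size, and scoring rule are assumptions for illustration rather than any team's actual formulation.

```python
# Hypothetical sketch: rank shots using two sub-queries embedded in a shared
# text-image space, rewarding shot i for matching sub-query A while one of the
# next `window` shots matches sub-query B.
import numpy as np

rng = np.random.default_rng(2)
shot_embs = rng.standard_normal((1_000, 256)).astype(np.float32)   # toy shot embeddings
shot_embs /= np.linalg.norm(shot_embs, axis=1, keepdims=True)

def temporal_score(query_a, query_b, window=5, k=10):
    """Rank shots i by sim(i, A) + best sim(j, B) over the next `window` shots."""
    sim_a = shot_embs @ query_a        # cosine similarity to sub-query A
    sim_b = shot_embs @ query_b        # cosine similarity to sub-query B
    scores = np.full(len(shot_embs), -np.inf, dtype=np.float32)
    for i in range(len(shot_embs) - 1):
        j_end = min(i + window, len(shot_embs) - 1)
        scores[i] = sim_a[i] + sim_b[i + 1:j_end + 1].max()
    return np.argsort(-scores)[:k]

query_a = rng.standard_normal(256).astype(np.float32); query_a /= np.linalg.norm(query_a)
query_b = rng.standard_normal(256).astype(np.float32); query_b /= np.linalg.norm(query_b)
top_shots = temporal_score(query_a, query_b)
```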